About the Survey

Summary
This section describes the survey and its processing and analysis methods, and outlines the statistical methods used.


Report
The CMAR survey was emailed directly to 403 persons in the late spring and summer of 2007, and additional copies may have been distributed through subcontractor organizations. We received 53 responses, for a 13% response rate. All of the persons who received the survey directly were believed to be active in governmental CMAR projects. Their names were gleaned from "sign-in sheets" from CMAR pre-proposal meetings, personnel lists from governmental Owners' web sites, and "rolodex" lists from CMAR projects sent to me by project managers I queried. All of the referrals were related to higher-education institution projects, so I believe most of the respondents had higher-education CMAR project experience; how much experience they had on other governmental projects is not known. Several subcontractor organizations offered to distribute the survey to their members, but few responses followed these offers.


The survey responses generally conformed to the suggested format. About half of the respondents sent in text comments. I entered the respondents' answers into a master Excel spreadsheet, assigning numeric codes to the non-numeric questions and data. All of the text comments were likewise stored in the Excel master. I then made two sub-master worksheets, one sorted by employer and one sorted by amount of governmental CMAR experience. I grouped respondents with one or two CMAR projects into a single category. Two respondents did not have any governmental CMAR experience but had several private CMAR jobs, so I included them.
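The coding step above can be sketched as follows. This is a minimal illustration of mapping non-numeric answers to integer codes before counting and statistics; the answer categories and values are hypothetical, not the actual survey's coding scheme.

```python
# Hypothetical codebook for one non-numeric survey question.
# (The real survey's categories and codes are not reproduced here.)
CODES = {"never": 0, "seldom": 1, "sometimes": 2, "often": 3, "always": 4}

# Illustrative raw answers as they might appear on returned surveys.
raw_answers = ["often", "seldom", "never", "often", "sometimes"]

# Convert each text answer to its integer code, ready for entry
# in the master spreadsheet and later counting.
coded = [CODES[a] for a in raw_answers]
print(coded)  # → [3, 1, 0, 3, 2]
```

Coding like this is what makes the later counts and numeric tests possible on answers that were originally text.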


I then wrote simple Excel programs to count the responses in each category. If the data were numeric, the next step was to run a one-way ANOVA to test for significance; I generally set the confidence level at 95%. If the ANOVA showed a significant difference, I then ran t-tests between all pairs of groups, using Bonferroni's method to adjust the significance level for the number of comparisons. In many cases the groups showed a pattern that was clear but not rigorously significant; in those cases I simply call it a "trend."
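The testing procedure above can be sketched in a few lines of Python. This is an illustration of the general technique (one-way ANOVA, then Bonferroni-corrected pairwise t-tests), not the author's Excel workbook; the group labels and response values are invented for the example.

```python
# One-way ANOVA across respondent groups, followed by pairwise
# t-tests at a Bonferroni-adjusted significance level.
from itertools import combinations
from scipy import stats

# Hypothetical numeric responses grouped by CMAR experience.
groups = {
    "1-2 projects": [3, 4, 3, 5, 4],
    "3-10 projects": [4, 5, 5, 4, 5],
    ">10 projects": [6, 7, 6, 7, 6],
}

alpha = 0.05  # 95% confidence level, as in the report

# One-way ANOVA: do the group means differ at all?
f_stat, p_anova = stats.f_oneway(*groups.values())

if p_anova < alpha:
    # Pairwise t-tests, dividing alpha by the number of
    # comparisons (Bonferroni's correction).
    pairs = list(combinations(groups, 2))
    alpha_adj = alpha / len(pairs)
    for a, b in pairs:
        t_stat, p = stats.ttest_ind(groups[a], groups[b])
        verdict = "significant" if p < alpha_adj else "not significant"
        print(f"{a} vs {b}: p = {p:.4f} ({verdict})")
```

With three groups there are three pairwise comparisons, so each t-test is judged against 0.05 / 3 ≈ 0.017 rather than 0.05, which is what keeps the family-wise error rate near the stated confidence level.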


It would have been useful to run a two-way ANOVA, but with only 53 responses I did not believe the results would be significant. The CMAR experience was fairly evenly distributed across the employer categories, except that there were no CMs with more than 10 governmental CMAR projects. A few did have more than 10 non-governmental projects.


For the non-numeric data I counted the various responses and then computed appropriate descriptive statistics, often expressing the percentage of responses in each category, e.g., "25% said the work was a waste of time." A table was often used to display the percentages.
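The tabulation above amounts to counting each answer category and dividing by the total. A minimal sketch, with invented response strings rather than the actual survey answers:

```python
# Count categorical responses and express each category as a
# percentage of the total, as in the tables described above.
from collections import Counter

# Illustrative text responses to one question (not the survey data).
responses = ["useful", "waste of time", "useful", "no opinion",
             "useful", "waste of time", "no opinion", "useful"]

counts = Counter(responses)
total = len(responses)
for answer, n in counts.most_common():
    print(f"{answer}: {n} ({100 * n / total:.0f}%)")
```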


I wanted to use some judgment in compiling and presenting the data without biasing it, for example by combining certain responses (e.g., "none" and "seldom") where doing so better demonstrated a trend. I may have exhibited some bias toward what I felt was an interesting conclusion, but if so, the bias is patent. I tried to separate my comments into "observations," which followed directly from the data, and "analysis," which contains my opinions.


There were many thoughtful comments. I tried to select the one or two comments that, in my opinion, best expressed a succinct view of each issue, and placed the rest of the comments in an appendix. I tried to be fair when there were two points of view.